108 research outputs found

    A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different time scales for neuronal activity and learning dynamics. Previous numerical works have reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on the neural network evolution. Furthermore, we show that the sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
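    The dynamics described above can be sketched numerically. The snippet below is a hypothetical minimal illustration, not the authors' exact equations: a discrete-time random recurrent network x ← tanh(Wx), a Hebbian update with passive forgetting (the rate names `eps` and `lam` are assumptions), and an estimate of the largest Lyapunov exponent obtained by propagating a tangent vector through the map's Jacobian.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100
    g = 3.0                                   # coupling gain; large g yields chaotic dynamics
    W = rng.normal(0.0, g / np.sqrt(N), (N, N))
    x = rng.normal(0.0, 1.0, N)
    eps, lam = 1e-3, 1e-3                     # Hebbian learning rate and passive-forgetting rate

    def step(W, x, v):
        """One map step x -> tanh(W x), plus tangent-vector update via the Jacobian."""
        y = np.tanh(W @ x)
        J = (1.0 - y**2)[:, None] * W         # Jacobian of the map at x: diag(1 - y^2) @ W
        return y, J @ v

    # Interleave fast neuronal dynamics with slow Hebbian learning,
    # accumulating the log-growth of the tangent vector.
    v = rng.normal(0.0, 1.0, N)
    log_growth = 0.0
    T = 2000
    for t in range(T):
        x, v = step(W, x, v)
        nrm = np.linalg.norm(v)
        log_growth += np.log(nrm)
        v /= nrm
        W += eps * np.outer(x, x) - lam * W   # Hebbian rule with passive forgetting

    lle = log_growth / T                      # largest-Lyapunov-exponent estimate
    ```

    As learning proceeds, the passive-forgetting term shrinks the effective coupling, which is one way the transition from chaos toward a fixed point can show up in such a simulation.
    
    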

    Neurobiologically Inspired Mobile Robot Navigation and Planning

    After a short review of biologically inspired navigation architectures, mainly relying on models of hippocampal anatomy, or at least of some of its functions, we present a navigation and planning model for mobile robots. This architecture is based on a model of hippocampal and prefrontal interactions. In particular, the system relies on the definition of a new cell type, “transition cells”, that encompasses traditional “place cells”.

    A Circuit-Level Model of Hippocampal Place Field Dynamics Modulated by Entorhinal Grid and Suppression-Generating Cells

    Hippocampal “place cells” and the precession of their extracellularly recorded spiking during traversal of a “place field” are well-established phenomena. More recent experiments describe associated entorhinal “grid cell” firing, but to date only conceptual models have been offered to explain the potential interactions among entorhinal cortex (EC) and hippocampus. To better understand not only spatial navigation, but mechanisms of episodic and semantic memory consolidation and reconsolidation, more detailed physiological models are needed to guide confirmatory experiments. Here, we report the results of a putative entorhinal-hippocampal circuit-level model that incorporates recurrent asynchronous-irregular non-linear (RAIN) dynamics, in the context of recent in vivo findings showing specific intracellular–extracellular precession disparities and place field destabilization by entorhinal lesioning. In particular, during computer-simulated rodent maze navigation, our model demonstrates asymmetric ramp-like depolarization and increased theta power and frequency (which can explain the phase precession disparity), and a role for STDP and KAHP channels. Additionally, we propose distinct roles for two entorhinal cell populations projecting to hippocampus. Grid cell populations transiently trigger place field activity, while tonic “suppression-generating cell” populations minimize aberrant place cell activation and limit the number of active place cells during traversal of a given field. Applied to place-cell RAIN networks, this tonic suppression explains an otherwise seemingly discordant association with overall increased firing. The findings of this circuit-level model suggest in vivo and in vitro experiments that could refute or support the proposed mechanisms of place cell dynamics and modulating influences of EC.

    Space and time-related firing in a model of hippocampo-cortical interactions

    In a previous model [3], a spectral timing neural network [4] was used to account for the role of the hippocampus (Hs) in the acquisition of classical conditioning. The ability to estimate the timing between separate events was then used to learn and predict transitions between places in the environment. We propose a neural architecture based on this work that explains the out-of-field activities in the Hs along with their temporal prediction capabilities. The model uses the hippocampo-cortical pathway as a means to spread reward signals to entorhinal neurons. Secondary predictions of the reward signal are then learned, based on transition learning, by pyramidal neurons of the CA region.

    Structure and dynamics of random recurrent neural networks

    In contrast to Hopfield-like networks, random recurrent neural networks (RRNN), where the couplings are random, exhibit complex dynamics (limit cycles, chaos). It is possible to store information in these networks through Hebbian learning. Eventually, learning “destroys” the dynamics and leads to a fixed-point attractor. We investigate here the structural change in the networks through learning, and show a “small-world” effect.
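    A “small-world” effect is typically diagnosed by comparing a network's clustering coefficient and mean shortest-path length against a random graph of the same density: small-world networks cluster much more strongly while keeping comparable path lengths. The sketch below shows one plausible way to compute both quantities from a thresholded weight matrix; the function names and the thresholding convention are assumptions, not the paper's method.

    ```python
    import numpy as np
    from collections import deque

    def clustering_coefficient(A):
        """Global clustering coefficient: 3 * triangles / connected triplets."""
        A = (A > 0).astype(int)
        np.fill_diagonal(A, 0)
        triangles = np.trace(A @ A @ A) / 6.0     # each triangle counted 6 times on the diagonal
        deg = A.sum(axis=1)
        triplets = (deg * (deg - 1)).sum() / 2.0
        return 3.0 * triangles / triplets if triplets else 0.0

    def mean_path_length(A):
        """Average shortest-path length over reachable pairs (BFS from every node)."""
        A = (A > 0)
        n = len(A)
        total, pairs = 0, 0
        for s in range(n):
            dist = [-1] * n
            dist[s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                for v in np.flatnonzero(A[u]):
                    if dist[v] < 0:
                        dist[v] = dist[u] + 1
                        q.append(v)
            for d in dist:
                if d > 0:
                    total += d
                    pairs += 1
        return total / pairs if pairs else float("inf")
    ```

    Applied to a learned RRNN weight matrix thresholded at some magnitude (and to a degree-matched random graph for reference), a clustering coefficient well above the random baseline with a similar mean path length would indicate the small-world structure the abstract describes.
    
    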

    Quoy M., “Structure and dynamics of random recurrent neural networks”, submitted
